43 research outputs found

    Teaching agents to learn: from user study to implementation

    Graphical user interfaces have helped center computer use on viewing and editing rather than on programming. Yet the need for end-user programming continues to grow. Software developers have responded to the demand with a barrage of customizable applications and operating systems. But the learning curve associated with a high level of customizability, even in GUI-based operating systems, often prevents users from easily modifying their software. Ironically, the question has become, "What is the easiest way for end users to program?" Perhaps the best way to customize a program, given current interface and software design, is for users to annotate tasks, verbally or via the keyboard, as they execute them. Experiments have shown that users can "teach" a computer most easily by demonstrating a desired behavior. But the teaching approach raises new questions about how the system, as a learning machine, will correlate, generalize, and disambiguate a user's instructions. To understand how best to create a system that can learn, the authors conducted an experiment in which users attempted to train an intelligent agent to edit a bibliography. Armed with the results of these experiments, the authors implemented an interactive machine learning system, which they call Configurable Instructible Machine Architecture (Cima). Designed to acquire behavior concepts from a few examples, Cima keeps users informed and allows them to influence the course of learning. Programming by demonstration reduces boring, repetitive work. Perhaps the most important lesson the authors learned is the value of involving users in the design process. By testing and critiquing their design ideas, users keep the designers focused on their objective: agents that make computer-based work more productive and more enjoyable.
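
    As a rough sketch of the teach-by-demonstration loop described above, the following shows an agent that records demonstrations, proposes a generalization once it has seen more than one, and lets the user confirm it before the behavior is adopted. The class and method names are hypothetical and the generalization step is deliberately naive; this is not the authors' Cima implementation.

```python
class InstructibleAgent:
    """Learns a procedure from demonstrations, confirming each generalization."""
    def __init__(self):
        self.examples = []        # recorded demonstrations
        self.learned_rules = []   # generalizations the user has confirmed

    def propose_generalization(self):
        # Naive generalization: keep only the steps common to every demonstration.
        if len(self.examples) < 2:
            return None
        common = [step for step in self.examples[0]
                  if all(step in demo for demo in self.examples[1:])]
        return common or None

    def teach(self, demonstration, confirm):
        """One teaching round; `confirm` is a callback that asks the user."""
        self.examples.append(demonstration)
        proposal = self.propose_generalization()
        if proposal and confirm(proposal):
            self.learned_rules.append(proposal)

agent = InstructibleAgent()
agent.teach(["select-entry", "italicize-title", "save"], confirm=lambda p: True)
agent.teach(["select-entry", "italicize-title"], confirm=lambda p: True)
print(agent.learned_rules)   # [['select-entry', 'italicize-title']]
```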

    Compression by induction of hierarchical grammars

    This paper describes a technique that develops models of symbol sequences in the form of small, human-readable, hierarchical grammars. The grammars are both semantically plausible and compact. The technique can induce structure from a variety of different kinds of sequence, and examples are given of models derived from English text, C source code, and a file of numeric data. The paper explains the grammatical induction technique, demonstrates its application to three very different sequences, evaluates its compression performance, and concludes by briefly discussing its use as a method of knowledge acquisition.
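
    The abstract does not spell out the induction algorithm; a common approach consistent with its description is digram substitution, in which the most frequent pair of adjacent symbols is repeatedly replaced by a fresh non-terminal. The sketch below is an offline, simplified illustration of that idea, not the authors' incremental algorithm.

```python
from collections import Counter

def induce_grammar(sequence, min_count=2):
    """Repeatedly replace the most frequent adjacent pair of symbols with a
    fresh non-terminal, yielding a small hierarchical grammar."""
    rules = {}                    # non-terminal name -> the pair it expands to
    seq = list(sequence)
    next_rule = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < min_count:
            break
        name = f"R{next_rule}"
        next_rule += 1
        rules[name] = pair
        # Rewrite the sequence, replacing non-overlapping occurrences of the pair.
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(name)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

top, rules = induce_grammar("abcabcabc")
print(top)     # ['R2', 'R1']
print(rules)   # {'R0': ('a', 'b'), 'R1': ('R0', 'c'), 'R2': ('R1', 'R1')}
```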

    Sketch-n-Sketch: Output-Directed Programming for SVG

    For creative tasks, programmers face a choice: use a GUI and sacrifice flexibility, or write code and sacrifice ergonomics? To obtain both flexibility and ease of use, a number of systems have explored a workflow that we call output-directed programming. In this paradigm, direct manipulation of the program's graphical output corresponds to writing code in a general-purpose programming language, and edits not possible with the mouse can still be enacted through ordinary text edits to the program. Such capabilities provide hope for integrating graphical user interfaces into what are currently text-centric programming environments. To further advance this vision, we present a variety of new output-directed techniques that extend the expressive power of Sketch-n-Sketch, an output-directed programming system for creating programs that generate vector graphics. To enable output-directed interaction at more stages of program construction, we expose intermediate execution products for manipulation and we present a mechanism for contextual drawing. Looking forward to output-directed programming beyond vector graphics, we also offer generic refactorings through the GUI, and our techniques employ a domain-agnostic provenance tracing scheme. To demonstrate the improved expressiveness, we implement a dozen new parametric designs in Sketch-n-Sketch without text-based edits. Among these is the first demonstration of building a recursive function in an output-directed programming setting. Comment: UIST 2019 paper + appendix.
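
    As a hedged illustration of the provenance idea mentioned above, the sketch below tags output values with the name of the source constant they derive from, so a direct-manipulation edit on the output can be mapped back to an edit of the program. The Traced, Program, and apply_output_edit names are hypothetical and do not reflect Sketch-n-Sketch's actual API.

```python
class Traced:
    """A number tagged with the name of the program constant it derives from."""
    def __init__(self, value, source):
        self.value, self.source = value, source

    def __add__(self, offset):
        # Simplistic: arithmetic keeps the provenance of the left operand.
        return Traced(self.value + offset, self.source)


class Program:
    """A toy 'program': named numeric constants plus a drawing function."""
    def __init__(self, constants):
        self.constants = dict(constants)      # name -> numeric literal in the code

    def run(self):
        # Executing the program produces output shapes with traced attributes.
        x = Traced(self.constants["x0"], "x0")
        return [{"shape": "rect", "x": x, "x2": x + 40}]

    def apply_output_edit(self, traced_value, new_value):
        # Translate a direct-manipulation edit on the output into a program edit.
        delta = new_value - traced_value.value
        self.constants[traced_value.source] += delta


prog = Program({"x0": 100})
rect = prog.run()[0]
prog.apply_output_edit(rect["x"], 130)   # the user drags the rectangle 30px right
print(prog.constants)                    # {'x0': 130}
```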

    Learning agents: from user study to implementation

    Learning agents acquire procedures by being taught rather than programmed. To teach effectively, users prefer communicating in richer and more flexible ways than traditional computer dialogs allow. This paper describes the design, implementation, and evaluation of a learning agent. In contrast to most Artificial Intelligence projects, the design centers on a user study with a human-simulated agent, conducted to discover the interactions that people find natural. Our work shows that users instinctively communicate via "hints": partially specified, ambiguous instructions. Hints may be given verbally, by pointing, or by selecting from menus. They may be unsolicited or arise in response to a query from the agent. We develop a theory of instruction types that enables an agent to interpret them. The implementation demonstrates that computers can learn from examples and ambiguous hints. Finally, an evaluation reveals the extent to which our system meets the original design requirements.
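
    To make the notion of typed instructions concrete, the sketch below models three illustrative instruction types, fully specified examples, yes/no responses, and ambiguous hints, and interprets each as evidence rather than a command. The types are invented for illustration and are not the paper's exact taxonomy.

```python
from dataclasses import dataclass

@dataclass
class Example:        # a demonstrated instance, labelled positive or negative
    features: dict
    positive: bool = True

@dataclass
class Response:       # the user's yes/no answer to a query from the agent
    features: dict
    accepted: bool

@dataclass
class Hint:           # partial and possibly ambiguous: "look at how it ends"
    attribute: str

def interpret(instruction, evidence):
    """Accumulate evidence: examples and responses label instances, while
    hints merely raise the assumed relevance of an attribute."""
    if isinstance(instruction, (Example, Response)):
        label = (instruction.positive if isinstance(instruction, Example)
                 else instruction.accepted)
        evidence["labeled"].append((instruction.features, label))
    elif isinstance(instruction, Hint):
        focus = evidence["focus"]
        focus[instruction.attribute] = focus.get(instruction.attribute, 0) + 1
    return evidence

evidence = {"labeled": [], "focus": {}}
interpret(Example({"field": "author", "ends_with": "."}), evidence)
interpret(Hint("ends_with"), evidence)
interpret(Response({"field": "title", "ends_with": "."}, accepted=False), evidence)
print(evidence["focus"], len(evidence["labeled"]))   # {'ends_with': 1} 2
```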

    Instructible agents

    Bibliography: p. 297-308

    Inductive task modeling for user interface customization

    This paper describes ActionStreams, a system for inducing task models from observations of user activity. The model can represent several task structures: hierarchy, variable sequencing, mandatory vs. optional actions, and interleaved sequences. The task models can be used for just-in-time automation and for guidance in user interface design.
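
    A minimal sketch of one ingredient of such induction, distinguishing mandatory from optional actions and extracting pairwise ordering constraints from observed traces, is shown below; the trace data and function names are invented for illustration and this is not the ActionStreams model itself.

```python
from itertools import combinations

def induce_task_model(traces):
    """Derive mandatory vs. optional actions and pairwise ordering constraints
    from several observed traces of the same task."""
    actions = set().union(*map(set, traces))
    mandatory = set.intersection(*map(set, traces))
    ordering = set()
    for a, b in combinations(sorted(actions), 2):
        for x, y in ((a, b), (b, a)):
            # x -> y holds if x precedes y in every trace containing both
            if all(t.index(x) < t.index(y) for t in traces if x in t and y in t):
                ordering.add((x, y))
    return {"mandatory": mandatory,
            "optional": actions - mandatory,
            "ordering": ordering}

traces = [
    ["open", "edit", "spellcheck", "save"],
    ["open", "edit", "save"],
    ["open", "edit", "spellcheck", "save"],
]
model = induce_task_model(traces)
print(model["mandatory"])                      # {'open', 'edit', 'save'} (set order varies)
print(model["optional"])                       # {'spellcheck'}
print(("edit", "save") in model["ordering"])   # True
```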

    Inducing procedures interactively: adventures with Metamouse

    Direct manipulation interfaces have greatly extended the class of casual computer users and encouraged them to conceptualize the system through metaphors. They have not, however, successfully incorporated facilities for end-user programming without breaking out of the direct manipulation paradigm. This thesis supports the contention that "teaching" provides an appropriate metaphor for programming in such an environment. It presents a system for inducing procedures that enables users of a graphics editor to teach it routine tasks by working through example traces. A central problem in the design is to meet the requirements for instructibility without imposing excessive demands on the teacher. A key component of the system is its teaching metaphor, a graphical apprentice called Metamouse. Metamouse is the target of the teacher's demonstrations. It is an eager learner designed to encourage constructive methods, clarify ambiguous situations, reduce errors and extraneous activity, and discourage free variation in teaching. Its behaviour is expected to be understood by users at a metaphorical, intentional level rather than from a precise specification. Metamouse has been fully designed but not yet fully implemented. However, a pilot system has induced procedures with variables, generalized actions, conditional branches and loops. Its ability to reduce errors and extraneous activity by prediction, and to identify underspecification, has been demonstrated. Tests showed that the metaphor is easily understood. Consequently the thesis argues that it is feasible for a system to induce procedures interactively from casual users. This significantly broadens the scope of application of machine learning techniques and opens new areas of research in knowledge acquisition. It facilitates the investigation of intelligent user interfaces and, last but not least, benefits the many users of interactive graphics systems.
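
    The sketch below illustrates the "eager learner" idea at a toy level: after watching a demonstration, the apprentice predicts the user's next step from the action just performed and reports how confident it is. It is an invented illustration, not the Metamouse induction engine, which also handles variables, conditionals, and loops.

```python
from collections import defaultdict, Counter

class EagerApprentice:
    """Watches demonstrated action traces and eagerly predicts the next step."""
    def __init__(self):
        self.followers = defaultdict(Counter)   # action -> counts of what followed it

    def watch(self, trace):
        for current, nxt in zip(trace, trace[1:]):
            self.followers[current][nxt] += 1

    def predict_next(self, last_action):
        """Return the most likely next action and the apprentice's confidence."""
        counts = self.followers.get(last_action)
        if not counts:
            return None, 0.0
        action, freq = counts.most_common(1)[0]
        return action, freq / sum(counts.values())

apprentice = EagerApprentice()
apprentice.watch(["select-box", "align-left", "group",
                  "select-box", "align-left", "group"])
print(apprentice.predict_next("align-left"))   # ('group', 1.0)
```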

    Inducing procedures interactively: adventures with Metamouse

    Bibliography: p. 117-119

    Interactive Concept Learning for End-User Applications

    Personalizable software agents will learn new tasks from their users. This implies being able to learn from instructions users might give: examples, yes/no responses, and ambiguous, incomplete hints. Agents should also exploit background knowledge customized for applications such as drawing, word processing and form-filling. The task models that agents learn describe data, actions and their context. Learning about data from examples and hints is the subject of this paper. The Cima learning system combines evidence from examples, task knowledge and user hints to form Disjunctive Normal Form (DNF) rules for classifying, generating or modifying data. Cima's dynamic bias manager generates candidate features (attribute values, functions or relations), from which its DNF learning algorithm selects relevant features and forms the rules. The algorithm is based on a classic greedy method, with two enhancements. First, the standard learning criterion, correct classification, is augmented with a..
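
    The greedy method the abstract refers to can be sketched as classic sequential covering: grow one conjunction at a time by adding the candidate feature that best separates positives from negatives, remove the positives it covers, and repeat; the learned conjunctions together form the DNF rule. In the illustration below, the hint weighting is an illustrative stand-in for Cima's use of user hints, not its actual criterion, and the data and names are invented.

```python
# Greedy sequential covering: grow one conjunction at a time, remove the
# positives it covers, repeat.  The disjunction of the learned conjunctions
# is the DNF rule.

def matches(conjunction, instance):
    """True if the instance satisfies every (attribute, value) pair."""
    return all(instance.get(attr) == val for attr, val in conjunction)

def learn_dnf(positives, negatives, hinted_attrs=()):
    all_feats = {(a, v) for inst in positives for a, v in inst.items()}
    remaining, rules = list(positives), []
    while remaining:
        conj, covered, excluded = [], list(remaining), list(negatives)
        feats = set(all_feats)
        while excluded and feats:
            # Score = positives kept + negatives ruled out; hinted attributes break ties.
            best = max(feats, key=lambda f: (
                sum(matches([f], p) for p in covered)
                + sum(not matches([f], n) for n in excluded),
                f[0] in hinted_attrs))
            feats.discard(best)
            conj.append(best)
            covered = [p for p in covered if matches([best], p)]
            excluded = [n for n in excluded if matches([best], n)]
        newly_covered = [p for p in remaining if matches(conj, p)]
        if not newly_covered:          # sketch-level safeguard against non-progress
            break
        rules.append(conj)
        remaining = [p for p in remaining if p not in newly_covered]
    return rules                        # list of conjunctions, read as a disjunction

positives = [{"field": "year", "digits": True}, {"field": "year", "digits": True}]
negatives = [{"field": "author", "digits": False}]
print(learn_dnf(positives, negatives, hinted_attrs={"field"}))
# [[('field', 'year')]]
```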